Inference Aided Reinforcement Learning for Incentive Mechanism Design in Crowdsourcing
Incentive mechanisms for crowdsourcing are designed to incentivize financially self-interested workers to generate and report high-quality labels. Existing mechanisms are often developed as one-shot static solutions, assuming a certain level of knowledge about worker models (expertise levels, costs for exerting efforts, etc.). In this paper, we propose a novel inference aided reinforcement mechanism that acquires data sequentially and requires no such prior assumptions. Specifically, we first design a Gibbs sampling augmented Bayesian inference algorithm to estimate workers' labeling strategies from the collected labels at each step. Then we propose a reinforcement incentive learning (RIL) method, building on top of the above estimates, to uncover how workers respond to different payments. RIL dynamically determines the payment without accessing any ground-truth labels. We theoretically prove that RIL is able to incentivize rational workers to provide high-quality labels both at each step and in the long run. Empirical results show that our mechanism performs consistently well under both rational and non-fully rational (adaptive learning) worker models. Moreover, the payments offered by RIL are more robust, with lower variance, than those of existing one-shot mechanisms.
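The Gibbs sampling inference step can be illustrated with a minimal one-coin worker model, in which each worker reports the true binary label with some unknown per-worker accuracy. This is a simplified sketch, not the authors' exact algorithm; the priors, initialization, and variable names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_label_inference(labels, n_iters=500, burn_in=100):
    """Estimate true binary labels and per-worker accuracies by Gibbs sampling.

    labels: (n_tasks, n_workers) array of 0/1 worker reports.
    One-coin model: worker j reports the true label with probability acc[j].
    A Beta(2, 1) prior on acc[j] nudges workers toward better-than-random,
    which also discourages the label-flipped mode of the posterior.
    """
    n_tasks, n_workers = labels.shape
    true = rng.integers(0, 2, size=n_tasks)   # initial guess of true labels
    acc = np.full(n_workers, 0.7)             # initial worker accuracies
    label_counts = np.zeros(n_tasks)

    for it in range(n_iters):
        # Sample each task's true label given current worker accuracies.
        for i in range(n_tasks):
            ll1 = np.prod(np.where(labels[i] == 1, acc, 1 - acc))  # P(reports | true=1)
            ll0 = np.prod(np.where(labels[i] == 0, acc, 1 - acc))  # P(reports | true=0)
            true[i] = rng.random() < ll1 / (ll1 + ll0)
        # Sample each worker's accuracy given the current true labels.
        n_correct = (labels == true[:, None]).sum(axis=0)
        acc = rng.beta(2 + n_correct, 1 + n_tasks - n_correct)
        if it >= burn_in:
            label_counts += true

    # Posterior-mean vote over post-burn-in samples, plus a final accuracy draw.
    return (label_counts / (n_iters - burn_in) > 0.5).astype(int), acc
```

The paper's actual inference additionally models workers' strategic effort choices; this sketch only shows the label-aggregation core that the payment mechanism builds on.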
Reviews: Inference Aided Reinforcement Learning for Incentive Mechanism Design in Crowdsourcing
Summary: In this paper, the authors study data collection via crowdsourcing. In the paper's setting, each task is a binary labeling task, and workers strategically choose effort levels and reporting strategies to maximize their utility. The true label of each task and the workers' parameters are all unknown to the requester. The requester's goal is to learn how to set payments and how to aggregate the collected labels from workers' past answers. The proposed approach combines incentive design, Bayesian inference, and reinforcement learning.
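The requester's learning problem described above can be sketched as a toy simulation: a hypothetical rational worker exerts high effort only when the payment makes it worthwhile, and the requester searches over discrete payment levels with an epsilon-greedy bandit. All cost and accuracy numbers are invented for illustration, and a bandit is a much simpler stand-in for the paper's RIL method:

```python
import random

def worker_quality(payment, cost_high=0.4, acc_high=0.9, acc_low=0.55):
    """Hypothetical rational worker: chooses the effort level with higher utility."""
    u_high = payment * acc_high - cost_high   # expected pay minus effort cost
    u_low = payment * acc_low                 # low effort is free but less accurate
    return acc_high if u_high > u_low else acc_low

def learn_payment(payments=(0.5, 1.0, 1.5, 2.0), episodes=2000, eps=0.1):
    """Epsilon-greedy search for the payment maximizing label quality net of spend."""
    q = {p: 0.0 for p in payments}   # running average reward per payment level
    n = {p: 0 for p in payments}
    rng = random.Random(0)
    for _ in range(episodes):
        p = rng.choice(payments) if rng.random() < eps else max(q, key=q.get)
        reward = worker_quality(p) - 0.2 * p   # label quality minus payment cost
        n[p] += 1
        q[p] += (reward - q[p]) / n[p]
    return max(q, key=q.get)
```

Under these invented parameters, high effort becomes rational once the payment exceeds cost_high / (acc_high - acc_low), roughly 1.14, so the bandit settles on the cheapest level above that threshold. The paper's RIL additionally handles sequential state and operates without ground-truth labels, both of which this sketch sidesteps.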
- Information Technology > Communications > Social Media > Crowdsourcing (0.62)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.62)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.40)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Model-Based Reasoning (0.40)
Authors: Zehong Hu, Yitao Liang, Jie Zhang, Zhao Li, Yang Liu